Reasoning About Strategies: On the Model-Checking Problem
In open systems verification, to formally check for reliability, one needs an
appropriate formalism to model the interaction between agents and express the
correctness of the system no matter how the environment behaves. An important
contribution in this context is given by modal logics for strategic ability, in
the setting of multi-agent games, such as ATL, ATL*, and the like.
Recently, Chatterjee, Henzinger, and Piterman introduced Strategy Logic, which
we denote here by CHP-SL, with the aim of getting a powerful framework for
reasoning explicitly about strategies. CHP-SL is obtained by using first-order
quantifications over strategies and has been investigated in the very specific
setting of two-agent turn-based games, where a non-elementary model-checking
algorithm has been provided. While CHP-SL is a very expressive logic, we claim
that it does not fully capture the strategic aspects of multi-agent systems. In
this paper, we introduce and study a more general strategy logic, denoted SL,
for reasoning about strategies in multi-agent concurrent games. We prove that
SL includes CHP-SL, while maintaining a decidable model-checking problem. In
particular, the algorithm we propose is computationally not harder than the
best one known for CHP-SL. Moreover, we prove that such a problem for SL is
NonElementarySpace-hard. This negative result has spurred us to investigate
here syntactic fragments of SL, strictly subsuming ATL*, with the hope of
obtaining an elementary model-checking problem. Among others, we study the
sublogics SL[NG], SL[BG], and SL[1G]. They encompass formulas in a special
prenex normal form having, respectively, nested temporal goals, Boolean
combinations of goals, and a single goal at a time. For these logics, we prove
that the model-checking problem for SL[1G] is 2ExpTime-complete, thus no
harder than the one for ATL*.
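To illustrate the prenex normal form these fragments impose, a hypothetical SL[1G] sentence quantifies over strategies, binds each agent to exactly one of them, and then asserts a single temporal goal (the agent names a, b and the atom p below are illustrative, not taken from the paper):

```latex
\langle\langle x \rangle\rangle \, [[ y ]] \,
  (a, x)(b, y) \; \mathsf{G}\,\mathsf{F}\, p
```

Read: there exists a strategy x for agent a such that, for all strategies y of agent b, the resulting play satisfies the single goal GF p, i.e., p holds infinitely often.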
Multi-Player Games with LDL Goals over Finite Traces
Linear Dynamic Logic on finite traces (LDLf) is a powerful logic for reasoning
about the behaviour of concurrent and multi-agent systems.
In this paper, we investigate techniques for both the characterisation and
verification of equilibria in multi-player games with goals/objectives
expressed using logics based on LDLf. This study builds upon a generalisation
of Boolean games, a logic-based game model of multi-agent systems where players
have goals succinctly represented in a logical way.
Because LDLf goals are considered, in the settings we study -- Reactive
Modules games and iterated Boolean games with goals over finite traces --
players' goals can express regular properties that are achieved in a
finite, but arbitrarily large, trace.
In particular, using alternating automata, the paper investigates
automata-theoretic approaches to the characterisation and verification of (pure
strategy Nash) equilibria, shows that the set of Nash equilibria in
multi-player games with LDLf objectives is regular, and provides complexity
results for the associated automata constructions.
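Since LDLf goals over finite traces define exactly the regular languages, checking whether a given finite play satisfies a player's goal reduces to running the trace through an equivalent finite automaton. A minimal sketch, with a hypothetical DFA encoding the goal "p occurs, and q occurs strictly after it" (the automaton and alphabet are illustrative assumptions, not the paper's alternating-automata construction):

```python
def run_dfa(delta, start, accepting, trace):
    """Return True iff the finite trace is accepted by the DFA."""
    state = start
    for letter in trace:
        state = delta[(state, letter)]
    return state in accepting

# States: 0 = nothing seen yet, 1 = p seen, 2 = q seen after p (accepting).
# Letters: "p", "q", and "-" (neither).
delta = {
    (0, "p"): 1, (0, "q"): 0, (0, "-"): 0,
    (1, "p"): 1, (1, "q"): 2, (1, "-"): 1,
    (2, "p"): 2, (2, "q"): 2, (2, "-"): 2,
}

print(run_dfa(delta, 0, {2}, ["-", "p", "-", "q"]))  # True: q after p
print(run_dfa(delta, 0, {2}, ["q", "p"]))            # False: q precedes p
```

The trace can be arbitrarily long, but acceptance is decided after finitely many steps, matching the finite-trace semantics discussed above.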
Equilibrium Design for Concurrent Games
In game theory, mechanism design is concerned with the design of incentives so that a desired outcome of the game can be achieved. In this paper, we study the design of incentives so that a desirable equilibrium is obtained, for instance, an equilibrium satisfying a given temporal logic property - a problem that we call equilibrium design. We base our study on a framework where system specifications are represented as temporal logic formulae, games as quantitative concurrent game structures, and players' goals as mean-payoff objectives. In particular, we consider system specifications given by LTL and GR(1) formulae, and show that implementing a mechanism to ensure that a given temporal logic property is satisfied on some/every Nash equilibrium of the game, whenever such a mechanism exists, can be done in PSPACE for LTL properties and in NP/Sigma^P_2 for GR(1) specifications. We also study the complexity of various related decision and optimisation problems, such as optimality and uniqueness of solutions, and show that the complexities of all such problems lie within the polynomial hierarchy. As an application, equilibrium design can be used as an alternative solution to the rational synthesis and verification problems for concurrent games with mean-payoff objectives whenever no solution exists, or as a technique to repair, whenever possible, concurrent games with undesirable rational outcomes (Nash equilibria) in an optimal way.
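For orientation, the GR(1) specifications mentioned above restrict LTL to an implication between recurrence assumptions and recurrence guarantees (the full definition also admits initial and safety constraints); schematically:

```latex
\Big( \bigwedge_{i=1}^{m} \mathsf{G}\,\mathsf{F}\, \psi_i \Big)
\;\rightarrow\;
\Big( \bigwedge_{j=1}^{n} \mathsf{G}\,\mathsf{F}\, \theta_j \Big)
```

where the \(\psi_i\) and \(\theta_j\) are Boolean formulas over the game's variables. It is this syntactic restriction that brings the complexity down from PSPACE to NP/Sigma^P_2 in the results above.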
Automated Temporal Equilibrium Analysis: Verification and Synthesis of Multi-Player Games
In the context of multi-agent systems, the rational verification problem is
concerned with checking which temporal logic properties will hold in a system
when its constituent agents are assumed to behave rationally and strategically
in pursuit of individual objectives. Typically, those objectives are expressed
as temporal logic formulae which the relevant agent desires to see satisfied.
Unfortunately, rational verification is computationally complex, and requires
specialised techniques in order to obtain practically usable implementations.
In this paper, we present such a technique. This technique relies on a
reduction of the rational verification problem to the solution of a collection
of parity games. Our approach has been implemented in the Equilibrium
Verification Environment (EVE) system. The EVE system takes as input a model of
a concurrent/multi-agent system represented using the Simple Reactive Modules
Language (SRML), where agent goals are represented as Linear Temporal Logic
(LTL) formulae, together with a claim about the equilibrium behaviour of the
system, also expressed as an LTL formula. EVE can then check whether the LTL
claim holds on some (or every) computation of the system that could arise
through agents choosing Nash equilibrium strategies; it can also check whether
a system has a Nash equilibrium, and synthesise individual strategies for
players in the multi-player game. After presenting our basic framework, we
describe our new technique and prove its correctness. We then describe our
implementation in the EVE system, and present experimental results which show
that EVE performs favourably in comparison to other existing tools that support
rational verification.
On the Complexity of Rational Verification
Rational verification refers to the problem of checking which temporal logic
properties hold of a concurrent multiagent system, under the assumption that
agents in the system choose strategies that form a game-theoretic equilibrium.
Rational verification can be understood as a counterpart to model checking for
multiagent systems, but while classical model checking can be done in
polynomial time for some temporal logic specification languages such as CTL,
and polynomial space with LTL specifications, rational verification is much
harder: the key decision problems for rational verification are
2EXPTIME-complete with LTL specifications, even when using explicit-state
system representations. Against this background, our contributions in this
paper are threefold. First, we show that the complexity of rational
verification can be greatly reduced by restricting specifications to GR(1), a
fragment of LTL that can represent a broad and practically useful class of
response properties of reactive systems. In particular, we show that for a
number of relevant settings, rational verification can be done in polynomial
space and even in polynomial time. Second, we provide improved complexity
results for rational verification when considering players' goals given by
mean-payoff utility functions, arguably the most widely used approach for
quantitative objectives in concurrent and multiagent systems. Finally, we
consider the problem of computing outcomes that satisfy social welfare
constraints. To this end, we consider both utilitarian and egalitarian social
welfare and show that computing such outcomes is either PSPACE-complete or
NP-complete.
Comment: Preprint submitted to Annals of Mathematics and Artificial
Intelligence.
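The mean-payoff utilities considered above assign to each player the limit average of the weights along an infinite run. For the ultimately periodic runs that equilibrium strategies induce, this limit is simply the average weight of the repeated loop; the finite prefix contributes nothing. A minimal sketch under that assumed prefix-plus-loop representation:

```python
def mean_payoff(prefix_weights, loop_weights):
    """Limit-average payoff of the run prefix . loop^omega.

    The prefix is accepted for interface symmetry but is irrelevant:
    its contribution vanishes in the limit average.
    """
    assert loop_weights, "the repeated loop must be non-empty"
    return sum(loop_weights) / len(loop_weights)

# Prefix weights [5, 5] are washed out; only the loop [1, 2, 3] matters.
print(mean_payoff([5, 5], [1, 2, 3]))  # 2.0
```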
Giving Instructions in Linear Temporal Logic
Our aim is to develop a formal semantics for giving instructions to taskable agents, to investigate the complexity of decision problems relating to these semantics, and to explore the issues that these semantics raise. In the setting we consider, agents are given instructions in the form of Linear Temporal Logic (LTL) formulae; the intuitive interpretation of such an instruction is that the agent should act in such a way as to ensure the formula is satisfied. At the same time, agents are assumed to have inviolable and immutable background safety requirements, also specified as LTL formulae. Finally, the actions performed by an agent are assumed to have costs, and agents must act within a limited budget. For this setting, we present a range of interpretations of an instruction to achieve an LTL task, intuitively ranging from "try to do this but only if you can do so with everything else remaining unchanged" up to "drop everything and get this done." For each case we present a formal pre-/post-condition semantics, and investigate the computational issues that they raise.
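The three ingredients of this setting, a task, background safety requirements, and a cost budget, can be sketched as a feasibility check on candidate plans. The cost table, the plans, and the lambda stand-ins for LTL satisfaction below are illustrative assumptions, not the paper's formal semantics:

```python
# Hypothetical action costs; in the paper, costs are part of the model.
COSTS = {"move": 2, "lift": 3, "wait": 1}

def within_budget(plan, budget):
    """True iff the plan's total action cost does not exceed the budget."""
    return sum(COSTS[a] for a in plan) <= budget

def feasible(plan, budget, satisfies_task, satisfies_safety):
    """A plan is feasible iff it is affordable, safe, and achieves the task."""
    return (within_budget(plan, budget)
            and satisfies_safety(plan)
            and satisfies_task(plan))

plan = ["move", "lift", "wait"]                 # total cost 6
ok = feasible(plan, budget=7,
              satisfies_task=lambda p: "lift" in p,    # stand-in for LTL task
              satisfies_safety=lambda p: "wait" in p)  # stand-in for safety
print(ok)  # True
```

The paper's interpretations of an instruction then differ in *which* feasible plans the agent may choose, from least disruptive to unconditional.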